student and teacher


His students suddenly started getting A's. Did a Google AI tool go too far?

Los Angeles Times

Google's Lens tool on Chromebooks can make it easier for students to cheat with one click, prompting teachers to question how they can maintain academic integrity. Over 70% of teachers worry AI tools are preventing students from developing critical thinking and writing skills.


A Appendix A.1 Implementation of DIST

Neural Information Processing Systems

This section presents the implementation code of DIST, as shown in Figure 4. The purpose of these methods is to learn the similarity relations between instances from the teacher, i.e., the semantic structure of the instances, which helps achieve better performance, especially when the student is trained with a stronger teacher. Here we conduct experiments to investigate the efficacy of our method with cosine similarity. As discussed in the main text, matching functions such as KL divergence and MSE are used to match the outputs of student and teacher in KD.
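The relation-matching idea described above can be sketched in a few lines. This is a minimal numpy illustration, not the authors' released DIST code: the function names (`cosine_relation`, `dist_style_relation_loss`) are illustrative, and it shows only the cosine-similarity variant discussed in the text, with MSE as the matching function between the student's and teacher's instance-similarity matrices.

```python
import numpy as np

def cosine_relation(logits):
    """Pairwise cosine similarity between the rows (instances) of a logit matrix."""
    z = logits / np.linalg.norm(logits, axis=1, keepdims=True)
    return z @ z.T

def dist_style_relation_loss(student_logits, teacher_logits):
    """MSE between the student's and teacher's instance-similarity matrices,
    so the student matches the teacher's inter-instance relations rather
    than its raw outputs."""
    return np.mean((cosine_relation(student_logits)
                    - cosine_relation(teacher_logits)) ** 2)
```

When the student reproduces the teacher's similarity structure exactly, the loss is zero; any disagreement in the relation matrices yields a positive penalty, which would be added to the usual task loss during training.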


"Would You Want an AI Tutor?" Understanding Stakeholder Perceptions of LLM-based Chatbots in the Classroom

Fuligni, Caterina, Figaredo, Daniel Dominguez, Stoyanovich, Julia

arXiv.org Artificial Intelligence

In recent years, Large Language Models (LLMs) rapidly gained popularity across all parts of society, including education. After initial skepticism and bans, many schools have chosen to embrace this new technology by integrating it into their curricula in the form of virtual tutors and teaching assistants. However, neither the companies developing this technology nor the public institutions involved in its implementation have set up a formal system to collect feedback from the stakeholders impacted by it. In this paper, we argue that understanding the perceptions of those directly affected by LLMs in the classroom, such as students and teachers, as well as those indirectly impacted, like parents and school staff, is essential for ensuring responsible use of AI in this critical domain. Our contributions are two-fold. First, we present results of a literature review focusing on the perceptions of LLM-based chatbots in education. We highlight important gaps in the literature, such as the exclusion of key educational agents (e.g., parents or school administrators) when analyzing the role of stakeholders, and the frequent omission of the learning contexts in which the AI systems are implemented. To address these gaps, we present a taxonomy that organizes existing literature on stakeholder perceptions. Second, we propose the Contextualized Perceptions for the Adoption of Chatbots in Education (Co-PACE) framework, which can be used to systematically elicit perceptions and inform whether and how LLM-based chatbots should be designed, developed, and deployed in the classroom.


Student-Informed Teacher Training

Messikommer, Nico, Xing, Jiaxu, Aljalbout, Elie, Scaramuzza, Davide

arXiv.org Artificial Intelligence

Our method leverages three networks (a), which are trained in three alternating phases: the roll-out phase (b), the policy update phase (c), and the alignment phase (d). The grey boxes represent networks frozen during the specific phase, and the dashed arrows indicate the gradient flow. Imitation learning with a privileged teacher has proven effective for learning complex control behaviors from high-dimensional inputs, such as images. In this framework, a teacher is trained with privileged task information, while a student tries to predict the actions of the teacher from more limited observations; e.g., in a robot navigation task, the teacher might have access to distances to nearby obstacles, while the student only receives visual observations of the scene. However, privileged imitation learning faces a key challenge: the student may be unable to imitate the teacher's behavior due to partial observability. This problem arises because the teacher is trained without considering whether the student is capable of imitating the learned behavior. To address this teacher-student asymmetry, we propose a framework for joint training of the teacher and student policies, encouraging the teacher to learn behaviors that can be imitated by the student despite the latter's limited access to information and its partial observability. Based on the performance bound in imitation learning, we add (i) the approximated action difference between teacher and student as a penalty term to the reward function of the teacher, and (ii) a supervised teacher-student alignment step. We motivate our method with a maze navigation task and demonstrate its effectiveness on complex vision-based quadrotor flight and manipulation tasks. In reinforcement learning (RL), an agent learns to perform a task by interacting with its environment and maximizing the cumulative rewards gained through these interactions. This work was supported by the European Research Council (ERC) under grant agreement No. 864042 (AGILEFLIGHT). However, this process requires extensive exploration, as the agent must avoid getting trapped in local minima, often resulting in a large number of environment interactions (Pathak et al., 2017). The number of interactions is further increased when the agent processes high-dimensional data as input (Ota et al., 2020). Using such observations, the policy must learn to extract a notion of the agent's state, a process that is computationally expensive when optimized solely through RL.
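Term (i) above, the action-difference penalty on the teacher's reward, can be sketched as a simple reward-shaping function. This is an illustrative sketch, not the paper's implementation: `shaped_teacher_reward` and the weight `lam` are assumed names, and the penalty is shown as a plain Euclidean distance between the two policies' actions at a given state.

```python
import numpy as np

def shaped_teacher_reward(task_reward, teacher_action, student_action, lam=0.1):
    """Teacher reward minus a penalty on the teacher-student action gap.

    Penalizing the (approximated) action difference discourages the teacher
    from learning behaviors the partially observing student cannot imitate;
    lam trades task performance against imitability.
    """
    gap = np.linalg.norm(np.asarray(teacher_action) - np.asarray(student_action))
    return task_reward - lam * gap
```

With `lam = 0`, this reduces to standard privileged teacher training; larger values push the teacher toward behaviors that remain reproducible from the student's limited observations.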


Elementary School Students' and Teachers' Perceptions Towards Creative Mathematical Writing with Generative AI

Song, Yukyeong, Kim, Jinhee, Xing, Wanli, Liu, Zifeng, Li, Chenglu, Oh, Hyunju

arXiv.org Artificial Intelligence

While mathematical creative writing can potentially engage students in expressing mathematical ideas in an imaginative way, some elementary school-age students struggle in this process. Generative AI (GenAI) offers possibilities for supporting creative writing activities, such as providing story generation. However, the design of GenAI-powered learning technologies requires careful consideration of the technology reception in the actual classrooms. This study explores students' and teachers' perceptions of creative mathematical writing with the developed GenAI-powered technology. The study adopted a qualitative thematic analysis of the interviews, triangulated with open-ended survey responses and classroom observation of 79 elementary school students, resulting in six themes and 19 subthemes. This study contributes by investigating the lived experience of GenAI-supported learning and the design considerations for GenAI-powered learning technologies and instructions.


Representational Alignment Supports Effective Machine Teaching

Sucholutsky, Ilia, Collins, Katherine M., Malaviya, Maya, Jacoby, Nori, Liu, Weiyang, Sumers, Theodore R., Korakakis, Michalis, Bhatt, Umang, Ho, Mark, Tenenbaum, Joshua B., Love, Brad, Pardos, Zachary A., Weller, Adrian, Griffiths, Thomas L.

arXiv.org Artificial Intelligence

A good teacher should not only be knowledgeable but should also be able to communicate in a way that the student understands -- to share the student's representation of the world. In this work, we integrate insights from machine teaching and pragmatic communication with the burgeoning literature on representational alignment to characterize a utility curve defining a relationship between representational alignment and teacher capability for promoting student learning. To explore the characteristics of this utility curve, we design a supervised learning environment that disentangles representational alignment from teacher accuracy. We conduct extensive computational experiments with machines teaching machines, complemented by a series of experiments in which machines teach humans. Drawing on our findings that improved representational alignment with a student improves student learning outcomes (i.e., task accuracy), we design a classroom matching procedure that assigns students to teachers based on the utility curve. If we are to design effective machine teachers, it is not enough to build teachers that are accurate -- we want teachers that can align, representationally, to their students too.
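The matching idea in the abstract can be sketched with a standard representational-similarity-analysis-style alignment score. This is a hypothetical illustration, not the paper's procedure: `alignment` correlates two agents' pairwise-distance profiles over the same stimuli, and `match_students` greedily assigns each student to the best-aligned teacher rather than using the paper's fitted utility curve.

```python
import numpy as np
from itertools import combinations

def alignment(rep_a, rep_b):
    """RSA-style alignment: correlation between two agents' pairwise-distance
    profiles over the same set of stimuli (rows = stimuli)."""
    def dists(r):
        return np.array([np.linalg.norm(r[i] - r[j])
                         for i, j in combinations(range(len(r)), 2)])
    return np.corrcoef(dists(rep_a), dists(rep_b))[0, 1]

def match_students(student_reps, teacher_reps):
    """Assign each student the teacher whose representation aligns best."""
    return [max(range(len(teacher_reps)),
                key=lambda t: alignment(s, teacher_reps[t]))
            for s in student_reps]
```

A student whose representation mirrors a given teacher's distance structure gets paired with that teacher, which is the intuition behind matching on alignment rather than on teacher accuracy alone.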


The AI generation gap: Are Gen Z students more interested in adopting generative AI such as ChatGPT in teaching and learning than their Gen X and Millennial Generation teachers?

Chan, Cecilia Ka Yuk, Lee, Katherine K. W.

arXiv.org Artificial Intelligence

This study aimed to explore the experiences, perceptions, knowledge, concerns, and intentions of Gen Z students with Gen X and Gen Y teachers regarding the use of generative AI (GenAI) in higher education. A sample of students and teachers were recruited to investigate the above using a survey consisting of both open and closed questions. The findings showed that Gen Z participants were generally optimistic about the potential benefits of GenAI, including enhanced productivity, efficiency, and personalized learning, and expressed intentions to use GenAI for various educational purposes. Gen X and Gen Y teachers acknowledged the potential benefits of GenAI but expressed heightened concerns about overreliance and ethical and pedagogical implications, emphasizing the need for proper guidelines and policies to ensure responsible use of the technology. The study highlighted the importance of combining technology with traditional teaching methods to provide a more effective learning experience. Implications of the findings include the need to develop evidence-based guidelines and policies for GenAI integration, foster critical thinking and digital literacy skills among students, and promote responsible use of GenAI technologies in higher education. Keywords: ChatGPT; Generative AI; AI Literacy; Risks; Advantages; Holistic competencies; Challenges; Benefits 1. Introduction Generation Z (Gen Z) students have largely replaced Millennials in undergraduate programmes, with institutions of higher education now primarily enrolling students from the former (Seemiller & Grace, 2016; Shatto & Erwin, 2016).
With educators welcoming a new cohort of students to campus, there is a growing concern regarding how to effectively teach this 'always-on' generation; for example, a study by Pearson (2018) showed that almost half of all Gen Z-ers (47%) spend a minimum of three hours daily on YouTube. The Gen Z population, much like its predecessors - the Silent and Baby Boomer generations, followed by Generation X (Gen X) and Generation Y (also known as Millennials) - has its own unique, distinct characteristics that have been shaped by information communication technologies, social and cultural shifts, and financial volatility. As such, it is crucial for higher education institutions to effectively engage with Gen Z, in order for scholars, teachers, and university staff to understand their aforementioned characteristics (Seemiller & Grace, 2017; Shatto & Erwin, 2016; Shorey et al., 2021) and in turn, effectively and ethically integrate generative AI (GenAI) technologies into the curriculum.


A Comprehensive AI Policy Education Framework for University Teaching and Learning

Chan, Cecilia Ka Yuk

arXiv.org Artificial Intelligence

This study aims to develop an AI education policy for higher education by examining the perceptions and implications of text generative AI technologies. Data was collected from 457 students and 180 teachers and staff across various disciplines in Hong Kong universities, using both quantitative and qualitative research methods. Based on the findings, the study proposes an AI Ecological Education Policy Framework to address the multifaceted implications of AI integration in university teaching and learning. This framework is organized into three dimensions: Pedagogical, Governance, and Operational. The Pedagogical dimension concentrates on using AI to improve teaching and learning outcomes, while the Governance dimension tackles issues related to privacy, security, and accountability. The Operational dimension addresses matters concerning infrastructure and training. The framework fosters a nuanced understanding of the implications of AI integration in academic settings, ensuring that stakeholders are aware of their responsibilities and can take appropriate actions accordingly. Keywords: AI Policy Framework; Artificial Intelligence; ChatGPT; Ethics; Assessment 1. Introduction In recent months, there has been a growing concern in academic settings about the use of text generative artificial intelligence (AI), such as ChatGPT, Bing and the latest, Co-Pilot integrated within the Microsoft Office suite. One of the main concerns is that students may use generative AI tools to cheat or plagiarise their written assignments and exams. In fact, a recent survey of university students found that nearly one in three students had used a form of AI, such as essay-generating software, to complete their coursework (Intelligent.com). About one-third of college students surveyed (sample size 1,000) in the US have utilized an AI chatbot such as ChatGPT to complete written homework assignments, with 60% using the programme on more than half of their assignments.
Generative AI tools such as ChatGPT are capable of imitating human writing, and some students use them to cheat. The study found that 75% of students believe that using the programme for cheating is wrong but still do it, and nearly 30% believe their professors are unaware of their use of the tool. The study also noted that some professors are considering whether to include ChatGPT in their lessons or join calls to ban it, with 46% of students saying their professors or institutions have banned the tool for homework. This has led to calls for stricter regulations and penalties for academic misconduct involving AI.


5 tips for maintaining teacher-student trust as AI classroom use grows

#artificialintelligence

As artificial intelligence-assisted technology increases in K-12 instruction and learning, many educators and education businesses see opportunities and potential for the tools -- including enhanced instruction that can be personalized for individual students and efficiencies in conducting student- or teacher-led research. Others, however, hold concerns about the potential for cheating or false accusations of cheating, as well as overuse or inappropriate uses of AI systems, which use large amounts of analyzed data to make predictions and perform tasks. "We've done this over the decades because technologies, when they're first introduced, we either say that they are going to be detrimental or they're going to be lifesaving," said Shelley Pasnik, senior advisor to the Center for Children and Technology, a nonprofit that researches technology's influences on teaching and learning. In fact, the use of technology in classrooms, including AI, can be much more complex because humans are actually guiding the application of the technology, said Pasnik, who is also senior vice president at Education Development Center, a nonprofit that designs, implements, and evaluates programs to improve education. The Center for Children and Technology is affiliated with the Education Development Center.


You're Not Going to Like How Colleges Respond to That Chatbot That Writes Papers

Slate

In the classroom of the future--if there still are any--it's easy to imagine the endpoint of an arms race: an artificial intelligence that generates the day's lessons and prompts, a student-deployed A.I. that will surreptitiously do the assignment, and finally, a third-party A.I. that will determine if any of the pupils actually did the work with their own fingers and brain. Loop complete; no humans needed. If you were to take all the hype about ChatGPT at face value, this might feel inevitable. But a response to the hit software demo, released by OpenAI in November to instant fanfare, is coming. You only have to look at how schools dealt with the potential externalities of newly essential tech during the pandemic to see how a similarly paranoid reaction to chatbots like ChatGPT could go--and how it shouldn't. When schools had to shift on the fly to remote learning three years ago, there was a massive turn to what at that point was mainly enterprise software: Zoom.